We hypothesize that existing sentence-level machine translation (MT) metrics become less effective when the human reference contains ambiguities. To test this hypothesis, we present a very simple method for extending pretrained metrics to incorporate context at the document level. We apply our method to three popular metrics, BERTScore, Prism, and COMET, and to the reference-free metric COMET-QE. We evaluate the extended metrics on the WMT 2021 metrics shared task using the provided MQM annotations. Our results show that the extended metrics outperform their sentence-level counterparts in about 85% of the tested conditions when results on low-quality human references are excluded. Additionally, we show that our document-level extension substantially improves accuracy on a discourse phenomena task, outperforming a dedicated baseline by up to 6.1%. Our experimental results support our initial hypothesis and show that a simple extension of the metrics enables them to exploit context to resolve ambiguities in the reference.
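The extension described above amounts to scoring each hypothesis together with a window of preceding context. A minimal sketch of this idea, assuming the `bert_score` package; the two-sentence window and the plain string concatenation are illustrative choices, not the authors' exact recipe:

```python
# Minimal sketch: turn a sentence-level metric into a document-level one by
# prepending preceding reference context to both hypothesis and reference.
# Assumptions: bert_score is installed; the 2-sentence window and the simple
# concatenation scheme are illustrative, not the paper's exact setup.
from bert_score import score

def doc_level_scores(hyps, refs, window=2, lang="en"):
    ctx_hyps, ctx_refs = [], []
    for i, (hyp, ref) in enumerate(zip(hyps, refs)):
        context = " ".join(refs[max(0, i - window):i])  # preceding reference sentences
        ctx_hyps.append((context + " " + hyp).strip())
        ctx_refs.append((context + " " + ref).strip())
    P, R, F = score(ctx_hyps, ctx_refs, lang=lang)
    return F  # per-sentence BERTScore F1 computed with document context

# Usage:
# f1 = doc_level_scores(["He saw the bat ."], ["He saw the bat ."])
```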
Sockeye 3 is the latest version of the Sockeye toolkit for Neural Machine Translation (NMT). Now based on PyTorch, Sockeye 3 provides faster model implementations and more advanced features with a further streamlined code base. This enables broader experimentation through faster iteration, efficient training of stronger and faster models, and the flexibility to move new ideas quickly from research to production. When running comparable models, Sockeye 3 is up to 126% faster than other PyTorch implementations on GPUs and up to 292% faster on CPUs. Sockeye 3 is open-source software released under the Apache 2.0 license.
Automatic dubbing (AD) is among the use cases where translations should fit a given length template in order to achieve synchrony between source and target speech. For neural machine translation (MT), generating translations whose length stays close to the source length (e.g., within +-10% of the source character count) while preserving quality is a challenging task. Controlling NMT output length comes at a cost to translation quality, which is usually mitigated with a two-step approach of generating N-best hypotheses and then re-ranking them based on length and quality. This work introduces a self-learning approach that allows a transformer model to directly learn to generate outputs that closely match the source length, in short, isometric MT. In particular, our approach to isometric MT requires neither generating multiple hypotheses nor any auxiliary scoring function. We report results on four language pairs (English to French, Italian, German, and Spanish) on a publicly available benchmark based on TED talk data. Both automatic and manual evaluation show that our self-learning approach performs on par with more complex isometric MT approaches.
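The length constraint mentioned above can be made concrete with a simple compliance check. A minimal sketch, assuming the +-10% threshold is measured in characters; the function name and default tolerance are illustrative:

```python
# Minimal sketch: check whether a translation is "isometric", i.e. its character
# length stays within +-10% of the source length. Names and defaults are illustrative.
def is_isometric(source: str, translation: str, tolerance: float = 0.10) -> bool:
    src_len = len(source)
    if src_len == 0:
        return len(translation) == 0
    relative_diff = abs(len(translation) - src_len) / src_len
    return relative_diff <= tolerance

# Usage:
# is_isometric("Hello, how are you today?", "Bonjour, comment vas-tu aujourd'hui ?")
```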
We introduce the task of prosody-aware machine translation, which aims to generate translations suitable for dubbing. Dubbing a spoken sentence requires transferring both the content and the prosodic structure of the source into the target language in order to preserve timing information. In practice, this means transferring pauses from the source to the target and ensuring that target speech segments have roughly the same duration as the corresponding source segments. In this work, we propose an implicit and an explicit modeling approach to integrate prosody information into neural machine translation. Experiments on English-German/French with automatic metrics show that the simplest of the considered approaches works best. The results are confirmed by human evaluation of the translations and of the dubbed videos.
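The timing requirement sketched above can be quantified per pause-delimited segment. A minimal sketch, assuming segments are given as (start, end) times in seconds; the data layout and the suggested 0.2 threshold are illustrative assumptions, not the paper's evaluation protocol:

```python
# Minimal sketch: compare durations of pause-delimited source and target speech
# segments. Segment layout and the mismatch threshold are illustrative.
from typing import List, Tuple

def segment_duration_mismatch(src_segments: List[Tuple[float, float]],
                              tgt_segments: List[Tuple[float, float]]) -> List[float]:
    """Relative duration mismatch for each aligned source/target segment pair."""
    mismatches = []
    for (s0, s1), (t0, t1) in zip(src_segments, tgt_segments):
        src_dur, tgt_dur = s1 - s0, t1 - t0
        mismatches.append(abs(tgt_dur - src_dur) / max(src_dur, 1e-6))
    return mismatches

# Usage: mismatches below, say, 0.2 would indicate segments well suited for dubbing.
```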
Self-supervised pre-trained transformers have improved the state of the art on a variety of speech tasks. Due to the quadratic time and space complexity of self-attention, they usually operate at the level of relatively short (e.g., utterance) segments. In this paper, we study the use of context, i.e., surrounding segments, during fine-tuning and propose a new approach called context-aware fine-tuning. We attach a context module on top of the last layer of a pre-trained model to encode the whole segment into a context embedding vector which is then used as an additional feature for the final prediction. During the fine-tuning stage, we introduce an auxiliary loss that encourages this context embedding vector to be similar to context vectors of surrounding segments. This allows the model to make predictions without access to these surrounding segments at inference time and requires only a tiny overhead compared to standard fine-tuned models. We evaluate the proposed approach using the SLUE and Librilight benchmarks for several downstream tasks: Automatic speech recognition (ASR), named entity recognition (NER), and sentiment analysis (SA). The results show that context-aware fine-tuning not only outperforms a standard fine-tuning baseline but also rivals a strong context injection baseline that uses neighboring speech segments during inference.
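A minimal PyTorch sketch of the idea described above: a small context module pooled over the last-layer features produces a context embedding that is concatenated to the frame features for the final prediction, and an auxiliary cosine loss pulls it toward the context vectors of neighboring segments. Module sizes, the mean pooling, and the loss form are assumptions for illustration, not the authors' exact configuration.

```python
# Minimal sketch of context-aware fine-tuning (illustrative sizes and choices).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAwareHead(nn.Module):
    def __init__(self, hidden_dim: int, num_labels: int):
        super().__init__()
        self.context_module = nn.Linear(hidden_dim, hidden_dim)   # context encoder
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, last_hidden: torch.Tensor):
        # last_hidden: (batch, time, hidden) features from a pre-trained encoder.
        ctx = self.context_module(last_hidden.mean(dim=1))        # (batch, hidden)
        ctx_expanded = ctx.unsqueeze(1).expand_as(last_hidden)
        logits = self.classifier(torch.cat([last_hidden, ctx_expanded], dim=-1))
        return logits, ctx

def auxiliary_context_loss(ctx: torch.Tensor, neighbor_ctx: torch.Tensor) -> torch.Tensor:
    # Encourage the segment's context embedding to resemble the context vectors
    # of surrounding segments (here via cosine similarity).
    return (1.0 - F.cosine_similarity(ctx, neighbor_ctx, dim=-1)).mean()
```

At inference time only `ContextAwareHead` is needed, so no neighboring segments have to be loaded, which is the tiny-overhead property described in the abstract.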
Decentralized bilevel optimization has received increasing attention recently due to its foundational role in many emerging multi-agent learning paradigms (e.g., multi-agent meta-learning and multi-agent reinforcement learning) over peer-to-peer edge networks. However, to work with the limited computation and communication capabilities of edge networks, a major challenge in developing decentralized bilevel optimization techniques is to lower sample and communication complexities. This motivates us to develop a new decentralized bilevel optimization called DIAMOND (decentralized single-timescale stochastic approximation with momentum and gradient-tracking). The contributions of this paper are as follows: i) our DIAMOND algorithm adopts a single-loop structure rather than following the natural double-loop structure of bilevel optimization, which offers low computation and implementation complexity; ii) compared to existing approaches, the DIAMOND algorithm does not require any full gradient evaluations, which further reduces both sample and computational complexities; iii) through a careful integration of momentum information and gradient tracking techniques, we show that the DIAMOND algorithm enjoys $\mathcal{O}(\epsilon^{-3/2})$ in sample and communication complexities for achieving an $\epsilon$-stationary solution, both of which are independent of the dataset sizes and significantly outperform existing works. Extensive experiments also verify our theoretical findings.
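For the gradient-tracking ingredient highlighted in (iii), the standard decentralized template at agent $i$ combines a consensus step with a tracking variable that estimates the network-average gradient. A generic sketch with momentum (the usual template, not DIAMOND's exact recursions) is

$$x_i^{t+1} = \sum_{j} W_{ij}\, x_j^{t} - \eta\, v_i^{t}, \qquad m_i^{t+1} = (1-\beta)\, m_i^{t} + \beta\, g_i^{t+1}, \qquad v_i^{t+1} = \sum_{j} W_{ij}\, v_j^{t} + m_i^{t+1} - m_i^{t},$$

where $W$ is the doubly stochastic mixing matrix of the peer-to-peer network, $g_i^{t}$ is a local stochastic gradient estimate, $\eta$ is the step size, and $\beta$ the momentum parameter. The tracking variable $v_i^{t}$ approximates the global average gradient using only stochastic estimates, which is how such schemes avoid full gradient evaluations.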
The domain of joint vision-language understanding, especially in the context of reasoning in Visual Question Answering (VQA) models, has garnered significant attention in the recent past. While most of the existing VQA models focus on improving the accuracy of VQA, the way models arrive at an answer is oftentimes a black box. As a step towards making the VQA task more explainable and interpretable, our method is built upon the SOTA VQA framework by augmenting it with an end-to-end explanation generation module. In this paper, we investigate two network architectures, including Long Short-Term Memory (LSTM) and Transformer decoder, as the explanation generator. Our method generates human-readable textual explanations while maintaining SOTA VQA accuracy on the GQA-REX (77.49%) and VQA-E (71.48%) datasets. Approximately 65.16% of the generated explanations are approved by humans as valid. Roughly 60.5% of the generated explanations are valid and lead to the correct answers.
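A minimal sketch of attaching an explanation generator to a VQA backbone as described above; the fusion scheme, the dimensions, and the use of `nn.TransformerDecoder` are illustrative assumptions rather than the paper's exact architecture:

```python
# Minimal sketch: a VQA answer head plus a Transformer-decoder explanation
# generator conditioned on fused vision-language features (illustrative design).
import torch
import torch.nn as nn

class ExplainableVQA(nn.Module):
    def __init__(self, feat_dim=768, vocab_size=30000, num_answers=3000):
        super().__init__()
        self.answer_head = nn.Linear(feat_dim, num_answers)
        self.token_embed = nn.Embedding(vocab_size, feat_dim)
        decoder_layer = nn.TransformerDecoderLayer(d_model=feat_dim, nhead=8, batch_first=True)
        self.explainer = nn.TransformerDecoder(decoder_layer, num_layers=2)
        self.lm_head = nn.Linear(feat_dim, vocab_size)

    def forward(self, fused_feats: torch.Tensor, expl_tokens: torch.Tensor):
        # fused_feats: (batch, regions+tokens, feat_dim) from a VQA backbone.
        # expl_tokens: (batch, expl_len) shifted explanation token ids.
        answer_logits = self.answer_head(fused_feats.mean(dim=1))
        tgt = self.token_embed(expl_tokens)
        expl_hidden = self.explainer(tgt=tgt, memory=fused_feats)
        explanation_logits = self.lm_head(expl_hidden)
        return answer_logits, explanation_logits
```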
Federated learning (FL) on deep neural networks facilitates new applications at the edge, especially for wearable and Internet-of-Thing devices. Such devices capture a large and diverse amount of data, but they have memory, compute, power, and connectivity constraints which hinder their participation in FL. We propose Centaur, a multitier FL framework, enabling ultra-constrained devices to efficiently participate in FL on large neural nets. Centaur combines two major ideas: (i) a data selection scheme to choose a portion of samples that accelerates the learning, and (ii) a partition-based training algorithm that integrates both constrained and powerful devices owned by the same user. Evaluations, on four benchmark neural nets and three datasets, show that Centaur gains ~10% higher accuracy than local training on constrained devices with ~58% energy saving on average. Our experimental results also demonstrate the superior efficiency of Centaur when dealing with imbalanced data, client participation heterogeneity, and various network connection probabilities.
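A minimal sketch of the data-selection idea in (i): keep only the fraction of on-device samples that currently contribute most to learning, here scored by per-sample loss. The loss-based criterion and the 30% budget are illustrative assumptions, not necessarily Centaur's exact scheme.

```python
# Minimal sketch: select the local samples with the highest per-sample loss
# before a local FL round (criterion and budget are illustrative).
import torch
import torch.nn.functional as F

def select_informative_samples(model, inputs, labels, keep_fraction=0.3):
    model.eval()
    with torch.no_grad():
        logits = model(inputs)
        per_sample_loss = F.cross_entropy(logits, labels, reduction="none")
    k = max(1, int(keep_fraction * len(inputs)))
    top_idx = torch.topk(per_sample_loss, k).indices
    return inputs[top_idx], labels[top_idx]
```

Keeping only the most informative samples reduces the on-device compute and energy spent per round, which is the constraint the abstract targets.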
Modern society is interested in capturing high-resolution, high-quality images due to the proliferation of sophisticated cameras. However, noise contamination in such images not only degrades their quality but also adversely affects subsequent processes such as remote sensing and object tracking. The timely processing of high-resolution images is constrained by the hardware limitations of the image-capturing instruments. Geodesic Gramian denoising (GGD) is a manifold-based noise filtering method that we introduced in our past research; it utilizes a few prominent singular vectors of the Gramian matrix of geodesic distances for the noise filtering process. The applicability of GGD is limited because it incurs $\mathcal{O}(n^6)$ computational complexity when the singular vectors of the $n^2 \times n^2$ data matrix are computed by singular value decomposition (SVD). In this study, we increase the efficiency of the GGD framework by replacing its SVD step with four different singular vector approximation techniques. Here, we compare both the computational time and the noise filtering performance of the four techniques integrated into GGD.
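The bottleneck described above, a truncated SVD of a large Gramian matrix, is exactly the setting where approximate singular vector solvers help. A minimal sketch, assuming scikit-learn's `randomized_svd` as one representative approximation; the paper compares four techniques, and this stands in for the general idea rather than their specific choices:

```python
# Minimal sketch: replace a full SVD of the geodesic Gramian matrix with a
# randomized truncated SVD that only computes the few leading singular vectors.
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(0)
gramian = rng.standard_normal((2000, 2000))
gramian = gramian @ gramian.T            # symmetric PSD stand-in for the Gramian

# A full SVD costs O(m^3) for an m x m matrix; since only a few prominent
# singular vectors are needed, a randomized solver is far cheaper.
U, s, Vt = randomized_svd(gramian, n_components=10, random_state=0)
print(U.shape, s.shape)                  # (2000, 10) (10,)
```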
We design and analyze quantum transformers, extending the state-of-the-art classical transformer neural network architecture known to perform well in natural language processing and image analysis. Building on previous work on parametrized quantum circuits for data loading and orthogonal neural layers, we introduce three quantum attention mechanisms, including a quantum transformer based on compound matrices. These quantum architectures can be built using shallow quantum circuits and yield qualitatively different classification models. We ran extensive simulations of the quantum transformers on standard medical image datasets, where they showed competitive, and at times better, performance compared with the best classical transformers and other classical benchmarks. The computational complexity of our quantum attention layer proves advantageous compared with the classical algorithm with respect to the size of the classified images. Our quantum architectures have thousands of parameters, compared with millions for the best classical methods. Finally, we implemented our quantum transformers on superconducting quantum computers and obtained encouraging results for experiments with up to six qubits.
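As a rough illustration of the "shallow parametrized circuit" building block mentioned above, the sketch below builds a two-qubit circuit with a trainable rotation and an entangling gate using Qiskit; it is a generic stand-in, not the paper's data loaders, orthogonal layers, or attention circuits.

```python
# Minimal sketch: a shallow parametrized two-qubit circuit of the kind used as a
# building block in parametrized-quantum-circuit models (generic illustration only).
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter

theta = Parameter("theta")
qc = QuantumCircuit(2)
qc.ry(theta, 0)     # trainable single-qubit rotation (e.g., loading a feature or parameter)
qc.cx(0, 1)         # entangling gate; the circuit stays shallow
qc.ry(theta, 1)
qc.measure_all()

print(qc.draw(output="text"))
```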